Welcome to the newest installment of our educational Next Level series! In our last post, Brian Childs offered up a beginner-level workflow to help discover your competitor’s backlinks. Today, we’re welcoming back Next Level veteran Jo Cameron to show you how to find low-quality pages on your site and decide their new fate. Read on and level up!
With an almost endless succession of Google updates shaking up the search results, it's pretty clear that substandard content just won't cut it.
I know, I know: we can't all keep up with the latest algorithm updates. We've got businesses to run, clients to impress, and a strong social media presence to maintain. After all, you haven't seen a huge drop in your traffic. It's probably OK, right?
So what's with the nagging sensation down in the pit of your stomach? It's not just that giant chili taco you had earlier. Maybe it's that feeling that your content might be treading on thin ice. Maybe you watched Rand's recent Whiteboard Friday (How to Determine if a Page is "Low Quality" in Google's Eyes) and just don't know where to start.
In this edition of Next Level, I'll show you how to start identifying your low-quality pages in a few simple steps with Moz Pro's Site Crawl. Once identified, you can decide whether to merge, shine up, or remove the content.
A quick recap of algorithm updates
The latest big fluctuations in the search results were said to be caused by King Fred: enemy of low-quality pages and champion of the people's right to find and enjoy content of value.
Fred took the fight to affiliate sites, and low-value commercial sites were also affected.
The good news is that even if this isn't directed at you, and you haven't taken a hit yourself, you can still learn from this update to improve your site. After all, why not stay on the right side of the biggest index of online content in the known universe? You'll come away with a good idea of what content is working for your site, and you may just take a ride to the top of the SERPs. Knowledge is power, after all.
Be a Pro
It's best if we just accept that Google updates are ongoing; they happen all.the.time. But with a site audit tool in your toolkit like Moz Pro's Site Crawl, they don't have to keep you up at night. Our shiny new Rogerbot crawler is the new kid on the block, and it's hungry to crawl your pages.
If you havenβt given it a try, sign up for a free trial for 30 days:
Set up your Moz Pro campaign (it takes 5 minutes, tops) and Rogerbot will be unleashed upon your site like a caffeinated spider.
Rogerbot hops from page to page following links to analyze your website. As Rogerbot hops along, it builds a beautiful database of your pages, flagging issues you can use to find those laggards. What a hero!
First stop: Thin content
Site Crawl > Content Issues > Thin Content
Thin content could be damaging your site. If it's deemed to be malicious, then it could result in a penalty. Things like zero-value pages with ads, or spammy doorway pages (little traps set to funnel visitors to other pages) are bad news.
First off, let's find those pages. Moz Pro Site Crawl will flag a page as "thin content" if it has fewer than 50 words (excluding navigation and ads).
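If you're curious how a check like that works under the hood, here's a rough sketch in Python. To be clear, this is my own approximation for illustration, not Rogerbot's actual logic, and the URL is a placeholder:

```python
# A rough approximation of a thin-content check, for illustration only.
# Strips common non-content elements before counting words.
import requests
from bs4 import BeautifulSoup

THIN_THRESHOLD = 50  # words, mirroring the Site Crawl threshold

def word_count(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Drop navigation, scripts, and other boilerplate before counting
    for tag in soup(["nav", "header", "footer", "aside", "script", "style"]):
        tag.decompose()
    return len(soup.get_text(separator=" ").split())

count = word_count("https://example.com/some-page")  # placeholder URL
print(f"{count} words -> {'thin' if count < THIN_THRESHOLD else 'probably fine'}")
```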
Now is a good time to familiarize yourself with Google's Quality Guidelines. Think long and hard about whether you may be doing this, intentionally or accidentally.
You're probably not straight-up spamming people, but you could do better and you know it. Our mantra is (repeat after me): "Does this add value for my visitors?" Well, does it?
Ok, you can stop chanting now.
For most of us, thin content is less of a penalty threat and more of an opportunity. By finding pages with thin content, you have the opportunity to figure out if they’re doing enough to serve your visitors. Pile on some Google Analytics data and start making decisions about improvements that can be made.
Using moz.com as an example, I've found 3 pages with thin content. Ta-da!
I'm not too concerned about the login page or the password reset page. I am, however, interested to see how the local search page is performing. Maybe we can find an opportunity to help people who land on this page.
Go ahead and export your thin content pages from Moz Pro to CSV.
We can then grab some data from Google Analytics to give us an idea of how well this page is performing. You may want to compare monthly data to see if there are any trends, or compare similar pages to see if improvements can be made.
I am by no means a Google Analytics expert, but I know how to get what I want. Most of the time, that is, except when I have to Google it, which is probably every other week.
Firstly: Behavior > Site Content > All Pages > Paste in your URL
- Pageviews – The number of times the page has been viewed, including repeat views.
- Avg. Time on Page – How long people spend on your page.
- Bounce Rate – The percentage of single-page sessions with no interaction.
For my example page, Bounce Rate is very interesting. This page lives to be interacted with. Its only joy in life is allowing people to search for a local business in the UK, US, or Canada. It is not an informational page at all. It doesn't provide a contact phone number or an answer to a query that might explain away a high bounce rate.
I'm going to add Pageviews and Bounce Rate to a spreadsheet so I can track them over time.
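If you'd rather script that tracking than copy-paste into a spreadsheet, here's a minimal sketch. The filenames and column names are my own assumptions; adjust them to match your actual Moz Pro and GA exports.

```python
# A minimal tracking sketch. Assumes:
# - thin_content.csv: Moz Pro Site Crawl export with a "URL" column
# - ga_pages.csv: GA export with "Page", "Pageviews", "Bounce Rate" columns,
#   where "Page" uses the same URL format as the crawl export
import os
from datetime import date

import pandas as pd

thin_pages = pd.read_csv("thin_content.csv")
ga = pd.read_csv("ga_pages.csv")

# Join crawl results to GA metrics, then stamp today's date on the snapshot
merged = thin_pages.merge(ga, left_on="URL", right_on="Page", how="left")
merged["snapshot_date"] = date.today().isoformat()

# Append to a running log so trends build up crawl after crawl
out = "thin_content_tracking.csv"
merged[["URL", "Pageviews", "Bounce Rate", "snapshot_date"]].to_csv(
    out, mode="a", header=not os.path.exists(out), index=False
)
```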
I'll also add some keywords that I want that page to rank for to my Moz Pro Rankings. That way I can make sure I'm targeting searcher intent and driving organic traffic that is likely to convert.
I'll also know if I'm being outranked by my competitors. How dare they, right?
As we've found with this local page, not all thin content is bad content. Another example may be a landing page with an awesome video that's adding value and performing consistently well. In this case, hold off on making sweeping changes. Track the data you're interested in; from there, you can look at making small changes and tracking the impact, or split testing some ideas. Either way, you want to make informed, data-driven decisions.
Action to take for tracking thin content pages
Export to CSV so you can track how these pages are performing alongside GA data. Make incremental changes and track the results.
Second stop: Duplicate title tags
Site Crawl > Content Issues > Duplicate Title Tags
Title tags show up in the search results to give human searchers a taste of what your content is about. They also help search engines understand and categorize your content. Without question, you want these to be well considered, relevant to your content, and unique.
Moz Pro Site Crawl flags any pages with matching title tags for your perusal.
Duplicate title tags are unlikely to get your site penalized, unless you've masterminded an army of pages that target irrelevant keywords and provide zero value. Once again, for most of us, it's a good way to find a missed opportunity.
Digging around your duplicate title tags is a lucky dip of wonder. You may find pages with repeated content that you want to merge, redundant pages that may be confusing your visitors, or maybe just pages for which you haven't spent the time crafting unique title tags.
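If you like to double-check outside the tool, grouping a crawl export by title makes the duplicates easy to eyeball. A quick sketch, assuming a CSV with "URL" and "Title" columns (adjust to match your export):

```python
# Group pages by title and print any title used on more than one URL.
import pandas as pd

crawl = pd.read_csv("site_crawl_export.csv")  # assumed export filename
dupes = crawl.groupby("Title").filter(lambda g: len(g) > 1)

for title, group in dupes.groupby("Title"):
    print(f'"{title}" appears on {len(group)} pages:')
    for url in group["URL"]:
        print(f"  {url}")
```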
Take this opportunity to review your title tags, make them interesting, and always make them relevant. Because I'm a Whiteboard Friday friend, I can't not link to this title tag hack video. Turn off Netflix for 10 minutes and enjoy.
Pro tip: To view the other duplicate pages, make sure you click on the little triangle icon to open that up like an accordion.
Hey now, what's this? Filed away under duplicate title tags I've found these cheeky pages.
These are the contact forms we have in place for reaching our help team. Yes, me included. Hi!
I've got some inside info for you all. We're actually in the process of redesigning our Help Hub, and these tool-specific pages definitely need a rethink. For now, I'm going to summon the powerful and mysterious rel=canonical tag.
This tells search engines that all those other pages are copies of the one true page to rule them all. Search engines like this, they understand it, and they bow down to honor the original source, as well they should. Visitors can still access these pages, and they won't ever know they've hit a page with an original source elsewhere. How very magical.
Action to take for duplicate title tags on similar pages
Use the rel=canonical tag to tell search engines that https://moz.com/help/contact is the original source.
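In practice, that means one line in the head of each duplicate page. The markup below is the standard canonical pattern, shown as a sketch rather than our exact source:

```html
<!-- In the <head> of each tool-specific contact page -->
<link rel="canonical" href="https://moz.com/help/contact" />
```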
Review visitor behavior and perform user testing on the Help Hub. We'll use this information to make a plan for redirecting those pages to one main page and adding a tool-type drop-down.
More duplicate titles within my subfolder-specific campaign
Because at Moz we've got a heck of a lot of pages, I've got another Moz Pro campaign set up to track the URL moz.com/blog. I find this handy if I want to look at issues on just one section of my site at a time.
You just have to enter your subfolder and limit your campaign when you set it up.
Just remember we won't crawl any pages outside of the subfolder. Make sure you have an all-encompassing, all-access campaign set up for the root domain as well.
Not enough allowance to create a subfolder-specific campaign? You can filter by URL from within your existing campaign.
In my Moz Blog campaign, I stumbled across these little fellows:
https://moz.com/blog/whiteboard-friday-how-to-get-an-seo-job
https://moz.com/blog/whiteboard-friday-how-to-get-an-seo-job-10504
This is a classic case of new content usurping the old. Instead of telling search engines, "Yeah, so I've got a few pages and they're kind of the same, but this one is the one true page," like we did with the rel=canonical tag before, this time I'll use the big cousin of the rel=canonical, the queen of content canonicalization: the 301 redirect.
All the power is sent to the page you're redirecting to, as are all the actual human visitors.
Action to take for duplicate title tags with outdated/updated content
Check the traffic and authority for both pages, then add a 301 redirect from one to the other. Consolidate and rule.
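On an Apache server, for instance, that redirect could be a single line in your .htaccess file. Here I'm assuming the checks favor keeping the cleaner URL; swap the direction if your data says otherwise:

```apache
# 301 the outdated duplicate to the page you're keeping
Redirect 301 /blog/whiteboard-friday-how-to-get-an-seo-job-10504 https://moz.com/blog/whiteboard-friday-how-to-get-an-seo-job
```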
It's also a good opportunity to refresh the content and check whether it's... what? I can't hear you. Adding value to my visitors! You got it.
Third stop: Duplicate content
Site Crawl > Content Issues > Duplicate Content
When the code and content on one page look the same as the code and content on another page of your site, those pages will be flagged as Duplicate Content. Our crawler will flag any pages with 90% or more overlapping content or code as duplicates.
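For the curious, here's one way to approximate that kind of comparison. This is purely illustrative, not Moz's actual algorithm, and the URLs are placeholders:

```python
# A rough page-similarity check using raw HTML source, for illustration only.
import difflib

import requests

def similarity(url_a, url_b):
    a = requests.get(url_a, timeout=10).text
    b = requests.get(url_b, timeout=10).text
    # Ratio of matching characters across the two documents, 0.0 to 1.0
    return difflib.SequenceMatcher(None, a, b).ratio()

score = similarity("https://example.com/page-a", "https://example.com/page-b")
if score >= 0.9:  # mirrors the 90% threshold mentioned above
    print(f"Likely duplicates ({score:.0%} similar)")
else:
    print(f"Probably distinct ({score:.0%} similar)")
```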
Officially, in the wise words of Google, duplicate content doesn't incur a penalty. However, it can be filtered out of the index, so still not great.
Having said that, the trick is in the fine print. One bot's duplicate content is another bot's thin content, and thin content can get you penalized. Let me refer you back to our old friend, the Quality Guidelines.
Are you doing one of these things intentionally or accidentally? Do you want me to make you chant again?
If you're being hounded by duplicate content issues and don't know where to start, then we've got more information on duplicate content on our Learning Center.
I've found some pages that clearly have different content on them, so why are these duplicates?
So friends, what we have here is thin content that's being flagged as duplicate.
There simply isn't enough content on these pages for bots to distinguish them from each other. Remember that our crawler looks at all the page code, as well as the copy that humans see.
You may find this frustrating at first: "Like, why are they duplicates?? They're different, gosh darn it!" But once you pass through all seven stages of duplicate content and arrive at acceptance, you'll see the opportunity you have here. Why not pop those topics on your content schedule? Why not use the "queen" again, and 301 redirect them to a similar resource, combining the power of both resources? Or maybe, just maybe, you could use them in a blog post about duplicate content, just like I have.
Action to take for duplicate pages with different content
Before you make any hasty decisions, check the traffic to these pages. Maybe dig a bit deeper and track conversions and bounce rate, as well. Check out our workflow for thin content earlier in this post and do the same for these pages.
From there you can figure out if you want to rework content to add value or redirect pages to another resource.
This awesome video from the ever-impressive Whiteboard Friday series talks about republishing. Seriously, you'll kick yourself if you don't watch it.
Broken URLs and duplicate content
Another dive into Duplicate Content has turned up two Help Hub URLs that point to the same page.
These are no good to man or beast. They are especially no good for our analytics (blurgh, data confusion!). No good for our crawl budget (blurgh, extra useless page!). User experience? Blurgh, nope, no good for that either.
Action to take for messed-up URLs causing duplicate content
Zap this time-waster with a 301 redirect. For me this is an easy decision: add a 301 to the long, messed-up URL with a PA of 1, no discussion. I love our new Learning Center so much that I'm going to link to it again so you can learn more about redirection and build your SEO knowledge.
It's the most handy place to check if you get stuck with any of the concepts I've talked about today.
Wrapping up
While it may feel scary at first to have your content flagged as having issues, the real takeaway here is that these are actually neatly organized opportunities.
With a bit of tenacity and some extra data from Google Analytics, you can start to understand the best way to fix your content and make your site easier to use (and more powerful in the process).
If you get stuck, just remember our chant: "Does this add value for my visitors?" Your content has to be for your human visitors, so think about them and their journey. And most importantly: be good to yourself and use a tool like Moz Pro that compiles potential issues into an easily digestible catalogue.
Enjoy your chili taco and your good night's sleep!